Due to non-stationarity, the distribution of real-world multivariate time series (MTS) changes over time, which is known as distribution drift. Most existing MTS forecasting models suffer greatly from distribution drift and degrade in forecasting performance over time. Existing methods address distribution drift by adapting to the most recently arrived data or by self-correcting based on meta-knowledge derived from future data. Despite their great success in MTS forecasting, these methods can hardly capture the intrinsic distribution changes, especially from a distributional perspective. Accordingly, we propose a novel framework, the Temporal Conditional Variational AutoEncoder (TCVAE), to model the dynamic distributional dependencies between historical observations and future data in MTS, and infer these dependencies as a temporal conditional distribution over latent variables. Specifically, a novel temporal Hawkes attention mechanism represents temporal factors that are subsequently fed into feed-forward networks to estimate the prior Gaussian distribution of the latent variables. The representation of temporal factors further dynamically adjusts the structures of the Transformer-based encoder and decoder to the distribution changes through a gated attention mechanism. Moreover, we introduce conditional continuous normalizing flows to transform the prior Gaussian into a complex, form-free distribution, facilitating flexible inference of the temporal conditional distribution. Extensive experiments on six real-world MTS datasets demonstrate the superior robustness and effectiveness of TCVAE compared with state-of-the-art MTS forecasting baselines. We further illustrate the applicability of TCVAE through multifaceted case studies and visualizations of real-world scenarios.
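To make the architectural description above concrete, here is a minimal PyTorch sketch (not the authors' implementation) of two ingredients the abstract names: a feed-forward network that maps a temporal-factor representation to the parameters of the Gaussian prior, and a gated attention block whose output is modulated by that representation. All module names, dimensions, and the placeholder temporal summary are illustrative assumptions; the Hawkes attention and the conditional normalizing flow are omitted.

```python
# Minimal sketch (not the authors' code) of the idea described above: a
# temporal-factor representation parameterizes a Gaussian prior over latent
# variables and gates a Transformer layer. Names and dimensions are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class TemporalFactorPrior(nn.Module):
    """Map a temporal-factor embedding to the mean/log-variance of a Gaussian prior."""
    def __init__(self, d_model: int, z_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, 2 * z_dim))

    def forward(self, temporal_factor: torch.Tensor):
        mu, log_var = self.net(temporal_factor).chunk(2, dim=-1)
        return mu, log_var

class GatedTransformerLayer(nn.Module):
    """Self-attention block whose output is modulated by a gate computed from
    the temporal-factor representation (a stand-in for the paper's gating)."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, temporal_factor: torch.Tensor):
        h, _ = self.attn(x, x, x)
        g = self.gate(temporal_factor).unsqueeze(1)            # (B, 1, d_model)
        return self.norm(x + g * h)

# Toy usage: batch of 8 series, 24 time steps, 16-dim embeddings.
x = torch.randn(8, 24, 16)
temporal_factor = x.mean(dim=1)                                # placeholder temporal summary
mu, log_var = TemporalFactorPrior(d_model=16, z_dim=8)(temporal_factor)
z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()          # reparameterized prior sample
encoded = GatedTransformerLayer(d_model=16)(x, temporal_factor)
```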
This paper presents DavarOCR, an open-source toolbox for OCR and document understanding tasks. DavarOCR currently implements 19 advanced algorithms covering 9 different task forms. DavarOCR provides detailed usage instructions and trained models for each algorithm. Compared with previous open-source OCR toolboxes, DavarOCR offers relatively more complete support for the sub-tasks of cutting-edge document understanding technologies. To promote the development and application of OCR technology in both academia and industry, we pay special attention to modules that can be shared across different techniques. DavarOCR is publicly released at https://github.com/hikopensource/davar-lab-ocr.
Price movement forecasting aims to predict the future trend of financial assets based on current market conditions and other relevant information. Recently, machine learning (ML) methods have become increasingly popular and achieved promising forecasting results in both academia and industry. Most existing ML solutions formulate the forecasting problem as classification (to predict direction) or regression (to predict return) over the entire training dataset. However, because of the extremely low signal-to-noise ratio and the stochastic nature of financial data, good trading opportunities are extremely scarce. As a result, without careful selection of potentially profitable samples, such ML methods are prone to capturing patterns of noise instead of real signals. To address this issue, we propose a novel price movement forecasting framework, named Locality-Aware Attention and Iterative Refinement Labeling (LARA), which consists of two main components: 1) locality-aware attention automatically extracts potentially profitable samples by attending to surrounding class-aware label information; moreover, equipped with metric learning techniques, it enjoys a task-specific distance metric and distributes attention over potentially profitable samples more effectively; 2) iterative refinement labeling further iteratively refines the labels of noisy samples and then combines the learned predictors to be robust to unseen and noisy samples. In extensive experiments on three real-world financial markets (ETFs, stocks, and cryptocurrencies), LARA achieves superior performance compared with traditional time-series analysis methods and a set of machine-learning-based competitors on the Qlib platform. Extensive ablation studies and experiments also demonstrate that LARA indeed captures more reliable trading opportunities.
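The sample-selection and label-refinement ideas can be illustrated with a toy sketch. The snippet below is an assumption-heavy stand-in, not LARA itself: it scores each sample by the label agreement of its nearest neighbors under a plain Euclidean metric (where LARA learns a task-specific metric and uses attention), keeps only samples with confident neighborhoods, and iteratively flips labels that strongly disagree with their neighbors. The thresholds are arbitrary demo values.

```python
# Illustrative sketch only: select "potentially profitable" samples by the
# agreement of labels in each sample's neighborhood, then iteratively refine
# noisy labels toward the local majority. The fixed Euclidean metric and the
# thresholds are assumptions for the demo, not the paper's formulation.
import numpy as np

def neighborhood_label_score(X, y, k=10):
    """Fraction of positive labels among each sample's k nearest neighbors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    np.fill_diagonal(d, np.inf)
    nn_idx = np.argsort(d, axis=1)[:, :k]
    return y[nn_idx].mean(axis=1)

def refine_labels(X, y, k=10, rounds=3, flip_threshold=0.8):
    """Iteratively flip labels that strongly disagree with their neighborhood."""
    y = y.copy()
    for _ in range(rounds):
        score = neighborhood_label_score(X, y, k)
        y = np.where(score > flip_threshold, 1,
                     np.where(score < 1 - flip_threshold, 0, y))
    return y

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                         # toy feature matrix
y = (X[:, 0] > 0).astype(int)                         # toy direction labels
y_noisy = np.where(rng.random(200) < 0.1, 1 - y, y)   # inject 10% label noise
score = neighborhood_label_score(X, y_noisy)
profitable_mask = (score > 0.7) | (score < 0.3)       # keep confident neighborhoods only
y_refined = refine_labels(X, y_noisy)
```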
We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect the way we communicate in daily life and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits the research field of dialog systems.
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a record-view representation of the sequence \cite{de2021transformers4rec} or a simple flattened representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
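A minimal, assumption-laden PyTorch sketch of the two components described above may help: TVM encodes the value sequence of each key independently, and KA self-attends over the resulting key-conditioned representations. Module names, dimensions, and the use of stock Transformer encoder layers are illustrative; the interleaved training schedule and shared attention heads are not shown.

```python
# Sketch of TVM (per-key temporal encoding) followed by KA (self-attention
# over key-conditioned representations). All design details are assumptions.
import torch
import torch.nn as nn

class TemporalValueModel(nn.Module):
    """Encode the sequence of values observed for one key over time."""
    def __init__(self, d_model: int = 32):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, value_seq: torch.Tensor) -> torch.Tensor:  # (B, T, d_model)
        return self.encoder(value_seq).mean(dim=1)               # pooled per-key vector

class KeyAggregator(nn.Module):
    """Self-attend over key-conditioned representations to summarize the sequence."""
    def __init__(self, d_model: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, key_reprs: torch.Tensor) -> torch.Tensor:  # (B, K, d_model)
        h, _ = self.attn(key_reprs, key_reprs, key_reprs)
        return h.mean(dim=1)                                     # (B, d_model)

# Toy usage: 4 sequences, 5 keys, 20 time steps, 32-dim value embeddings.
values = torch.randn(4, 5, 20, 32)
tvm, ka = TemporalValueModel(), KeyAggregator()
key_reprs = torch.stack([tvm(values[:, k]) for k in range(values.size(1))], dim=1)
sequence_repr = ka(key_reprs)   # one vector per structured-object sequence
```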
Deploying reliable deep learning techniques in interdisciplinary applications needs learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in those neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately distinguish preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer leads to quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
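One of the prior-guided constraints mentioned above, sparsity of the learned attention maps, can be illustrated with a simple regularized objective. The entropy-based penalty and its weight below are illustrative assumptions rather than the paper's exact terms.

```python
# Hedged sketch: add a sparsity penalty on learned attention maps to the
# classification objective, pushing the network toward compact, explainable
# attention. The entropy form and weight are assumptions, not the paper's.
import torch
import torch.nn.functional as F

def attention_sparsity_penalty(attn, eps=1e-8):
    """Mean entropy of per-subject attention distributions; lower = sparser."""
    p = attn / (attn.sum(dim=-1, keepdim=True) + eps)
    return -(p * (p + eps).log()).sum(dim=-1).mean()

def explainer_loss(logits, labels, attn, lambda_sparse=0.1):
    return F.cross_entropy(logits, labels) + lambda_sparse * attention_sparsity_penalty(attn)

# Toy usage: 8 subjects, 2 classes, attention over 1,000 cortical vertices.
logits, labels = torch.randn(8, 2), torch.randint(0, 2, (8,))
attn = torch.rand(8, 1000)
loss = explainer_loss(logits, labels, attn)
```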
Forecasts by the European Centre for Medium-Range Weather Forecasts (ECMWF; EC for short) can provide a basis for the establishment of maritime-disaster warning systems, but they contain some systematic biases. The fifth-generation EC atmospheric reanalysis (ERA5) data have high accuracy, but are delayed by about 5 days. To overcome this issue, a spatiotemporal deep-learning method could be used for nonlinear mapping between EC and ERA5 data, which would improve the quality of EC wind forecast data in real time. In this study, we developed the Multi-Task-Double Encoder Trajectory Gated Recurrent Unit (MT-DETrajGRU) model, which uses an improved double-encoder forecaster architecture to model the spatiotemporal sequence of the U and V components of the wind field; we designed a multi-task learning loss function to correct wind speed and wind direction simultaneously using only one model. The study area was the western North Pacific (WNP), and real-time rolling bias corrections were made for 10-day wind-field forecasts released by the EC between December 2020 and November 2021, divided into four seasons. Compared with the original EC forecasts, after correction using the MT-DETrajGRU model the wind speed and wind direction biases in the four seasons were reduced by 8-11% and 9-14%, respectively. In addition, the proposed method modelled the data uniformly under different weather conditions. The correction performance under normal and typhoon conditions was comparable, indicating that the data-driven mode constructed here is robust and generalizable.
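The multi-task idea of correcting speed and direction from one model's U and V outputs can be written down as a combined loss. The sketch below is a plausible form under stated assumptions (mean-squared speed error plus a cosine-based direction error with equal weights), not necessarily the paper's exact loss function.

```python
# Hedged sketch of a multi-task wind-correction loss: the model predicts
# corrected U and V components, and the loss combines a speed term and a
# direction term derived from them. Weights and the cosine direction error
# are assumptions for illustration.
import torch

def wind_multitask_loss(u_pred, v_pred, u_true, v_true, w_speed=1.0, w_dir=1.0):
    speed_pred = torch.sqrt(u_pred**2 + v_pred**2 + 1e-8)
    speed_true = torch.sqrt(u_true**2 + v_true**2 + 1e-8)
    speed_loss = torch.mean((speed_pred - speed_true) ** 2)

    # Direction error via the angle between predicted and true wind vectors:
    # 1 - cos(theta) is 0 for perfect agreement and 2 for opposite directions.
    cos_theta = (u_pred * u_true + v_pred * v_true) / (speed_pred * speed_true)
    dir_loss = torch.mean(1.0 - cos_theta.clamp(-1.0, 1.0))

    return w_speed * speed_loss + w_dir * dir_loss

# Toy check on random tensors shaped (batch, time, lat, lon).
u_pred, v_pred = torch.randn(2, 4, 8, 8), torch.randn(2, 4, 8, 8)
u_true, v_true = torch.randn(2, 4, 8, 8), torch.randn(2, 4, 8, 8)
loss = wind_multitask_loss(u_pred, v_pred, u_true, v_true)
```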
Pure transformers have shown great potential for vision tasks recently. However, their accuracy in small or medium datasets is not satisfactory. Although some existing methods introduce a CNN as a teacher to guide the training process by distillation, the gap between teacher and student networks would lead to sub-optimal performance. In this work, we propose a new One-shot Vision transformer search framework with Online distillation, namely OVO. OVO samples sub-nets for both teacher and student networks for better distillation results. Benefiting from the online distillation, thousands of subnets in the supernet are well-trained without extra finetuning or retraining. In experiments, OVO-Ti achieves 73.32% top-1 accuracy on ImageNet and 75.2% on CIFAR-100.
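The online-distillation step can be illustrated with a standard soft-target loss applied to logits from a sampled teacher sub-net and a sampled student sub-net. The temperature, weighting, and the assumption that sub-net sampling happens elsewhere are illustrative; this is not OVO's released code.

```python
# Minimal sketch (assumptions throughout) of an online-distillation step: the
# student sub-net is trained against both the hard labels and the teacher
# sub-net's soft predictions.
import torch
import torch.nn.functional as F

def online_distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits.detach() / T, dim=-1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy usage with random logits for a 100-class problem (e.g., CIFAR-100).
labels = torch.randint(0, 100, (16,))
student_logits, teacher_logits = torch.randn(16, 100), torch.randn(16, 100)
loss = online_distillation_loss(student_logits, teacher_logits, labels)
```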
Accurate path following is challenging for autonomous robots operating in uncertain environments. Adaptive and predictive control strategies are crucial for a nonlinear robotic system to achieve high-performance path following control. In this paper, we propose a novel learning-based predictive control scheme that couples a high-level model predictive path following controller (MPFC) with a low-level learning-based feedback linearization controller (LB-FBLC) for nonlinear systems under uncertain disturbances. The low-level LB-FBLC utilizes Gaussian Processes to learn the uncertain environmental disturbances online and tracks the reference state accurately with a probabilistic stability guarantee. Meanwhile, the high-level MPFC exploits the linearized system model augmented with a virtual linear path dynamics model to optimize the evolution of path reference targets, and provides the reference states and controls for the low-level LB-FBLC. Simulation results illustrate the effectiveness of the proposed control strategy on a quadrotor path following task under unknown wind disturbances.
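As a toy illustration of the low-level idea (learning an additive disturbance with a Gaussian process and cancelling its predicted mean in the control law), here is a 1-D sketch with assumed dynamics, gains, and kernel; the actual LB-FBLC operates online on quadrotor dynamics with a probabilistic stability guarantee.

```python
# Illustrative sketch only: fit a GP to observed disturbance residuals and
# subtract its predicted mean in a PD tracking law, in the spirit of the
# learning-based feedback linearization described above. Dynamics, gains,
# and kernel choices are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def true_disturbance(x):
    return 0.5 * np.sin(2.0 * x)          # unknown to the controller

# Collect residuals: observed acceleration minus commanded acceleration.
rng = np.random.default_rng(0)
x_samples = rng.uniform(-3, 3, size=(40, 1))
d_samples = true_disturbance(x_samples).ravel() + 0.01 * rng.normal(size=40)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_samples, d_samples)

def control(x, v, x_ref, kp=4.0, kd=2.0):
    """PD tracking law with GP-based disturbance compensation."""
    d_hat = gp.predict(np.array([[x]]))[0]
    return kp * (x_ref - x) - kd * v - d_hat   # cancel the learned disturbance

u = control(x=0.3, v=0.0, x_ref=1.0)
```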
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds. Bayesian optimization algorithms build a surrogate of the objective function by employing Gaussian processes and quantify the uncertainty in that surrogate by deriving an acquisition function. This acquisition function represents the probability of improvement based on the kernel of the Gaussian process, which guides the search in the optimization process. The critical challenge for designing Bayesian optimization algorithms on manifolds lies in the difficulty of constructing valid covariance kernels for Gaussian processes on general manifolds. Our approach is to employ extrinsic Gaussian processes by first embedding the manifold onto some higher dimensional Euclidean space via equivariant embeddings and then constructing a valid covariance kernel on the image manifold after the embedding. This leads to efficient and scalable algorithms for optimization over complex manifolds. Simulation studies and real data analyses are carried out to demonstrate the utility of our eBO framework by applying eBO to various optimization problems over manifolds such as the sphere, the Grassmannian, and the manifold of positive definite matrices.
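The extrinsic-kernel construction can be illustrated on the unit sphere, whose embedding into R^3 is simply the inclusion map: evaluate an ordinary RBF kernel on the embedded coordinates, which remains a valid covariance because it is the restriction of a positive-definite Euclidean kernel to the image manifold. The sketch below is illustrative, not the authors' code.

```python
# Small sketch of an extrinsic kernel: embed manifold points (here S^2 in R^3)
# and apply a standard RBF kernel to the embedded coordinates.
import numpy as np

def extrinsic_rbf_kernel(X, Y, lengthscale=1.0):
    """RBF kernel evaluated on embedded manifold points (rows of X, Y in R^d)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def random_sphere_points(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)   # project onto S^2

rng = np.random.default_rng(0)
X = random_sphere_points(20, rng)
K = extrinsic_rbf_kernel(X, X)
# Positive semi-definiteness check (up to jitter): all eigenvalues non-negative.
assert np.linalg.eigvalsh(K + 1e-10 * np.eye(len(K))).min() > -1e-8
```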